Plant-and-Steal: Truthful Fair Allocations via Predictions

Neural Information Processing Systems

We further show that the mechanism's performance degrades gracefully with the number of "mistakes" in the prediction; that is, we interpolate between the two extremes of no mistakes and the maximum number of mistakes.


How Hard is it to Explain Preferences Using Few Boolean Attributes?

Anzinger, Clemens, Chen, Jiehua, Hatschka, Christian, Sorge, Manuel, Temper, Alexander

arXiv.org Artificial Intelligence

We study the computational complexity of explaining preference data through Boolean attribute models (BAMs), motivated by extensive research involving attribute models and their promise in understanding preference structure and enabling more efficient decision-making processes. In a BAM, each alternative has a subset of Boolean attributes, each voter cares about a subset of attributes, and voters prefer alternatives with more of their desired attributes. In the BAM problem, we are given a preference profile and a number $k$, and want to know whether there is a Boolean $k$-attribute model explaining the profile. We establish a complexity dichotomy in the number of attributes $k$: BAM is linear-time solvable for $k \le 2$ but NP-complete for $k \ge 3$. The problem remains hard even when preference orders have length two. On the positive side, BAM becomes fixed-parameter tractable when parameterized by the number of alternatives $m$. For the special case of two voters, we provide a linear-time algorithm. We also analyze variants where partial information is given: when voter preferences over attributes are known (BAM WITH CARES) or when alternative attributes are specified (BAM WITH HAS). We show that for most parameters BAM WITH CARES is more difficult, whereas BAM WITH HAS is more tractable, except that it is NP-hard even for a single voter.
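Checking whether a given BAM explains a profile is the easy direction; the hardness in the BAM problem lies in finding the model. A minimal verifier sketch in Python, under the assumption that "explaining" means each voter's strict ranking follows strictly decreasing counts of their desired attributes (the names `explains`, `has`, and `cares` are illustrative, not from the paper):

```python
def explains(profile, has, cares):
    """Check whether a Boolean attribute model explains a strict profile.

    profile[v] is voter v's ranking (most preferred first),
    has[a] is the attribute set of alternative a,
    cares[v] is the attribute set voter v cares about.
    """
    for v, ranking in profile.items():
        score = lambda a: len(has[a] & cares[v])
        # the voter's ranking must follow strictly decreasing scores
        if any(score(ranking[i]) <= score(ranking[i + 1])
               for i in range(len(ranking) - 1)):
            return False
    return True
```

For example, with `has = {"x": {0, 1}, "y": {0}, "z": set()}` and `cares = {"v1": {0, 1}}`, the ranking `x > y > z` is explained (scores 2 > 1 > 0), while the reverse ranking is not.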



On Robust Popular Matchings with Tie-Bounded Preferences and Stable Matchings with Two-Sided Ties

De, Koustav

arXiv.org Artificial Intelligence

We are given a bipartite graph $G = \left( A \cup B, E \right)$. In the one-sided model, every $a \in A$ (the agents) ranks its neighbours $z \in N_{a}$ strictly, no $b \in B$ has a preference order over its neighbours $y \in N_{b}$, and the vertices in $B$ abstain from voting over matchings. In the two-sided model with one-sided ties, every $a \in A$ ranks its neighbours $z \in N_{a}$ strictly, and every $b \in B$ puts all of its neighbours $y \in N_{b}$ into a single large tie, i.e., prefers them all equally. In this model, when two matchings compete in a majority election, $b \in B$ abstains when both matchings saturate $b$ or both leave $b$ unsaturated; otherwise $b$ votes for the matching in which it is saturated. A popular matching $M$ is \emph{robust} if it remains popular across multiple instances. In the one-sided model, we analyse when a robust popular matching exists if the instances differ only in the preference order of a single agent, and we propose a polynomial-time algorithm to decide whether one exists. We give a simple characterisation of popular matchings in the two-sided model with one-sided ties, and show that in this model, too, there is a polynomial-time algorithm to decide whether a robust popular matching exists when the instances differ only in the preference orders of a single agent. Finally, we resolve the stable matching problem in bipartite graphs $G = (A \cup B, E)$ where \textit{both} sides may have weak preferences (ties allowed), under the restriction that every tie has length at most $k$.
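The majority election underlying popularity can be sketched directly from the definitions above. The following Python function is an illustrative sketch (not the paper's algorithm): it computes the vote margin between two matchings in the two-sided model with one-sided ties, assuming agents in $A$ always prefer being matched to being unmatched.

```python
def vote_margin(M1, M2, pref_A, B):
    """Return (votes for M1) - (votes for M2) in the two-sided model
    with one-sided ties.

    M1, M2 map each vertex to its partner (or None if unmatched),
    pref_A[a] is agent a's strict ranking of neighbours (best first),
    B is the list of vertices on the tied side.
    """
    margin = 0
    for a in pref_A:
        p1, p2 = M1.get(a), M2.get(a)
        if p1 == p2:
            continue  # a is indifferent, so it abstains
        # being matched beats being unmatched; otherwise use a's ranking
        if p2 is None or (p1 is not None
                          and pref_A[a].index(p1) < pref_A[a].index(p2)):
            margin += 1
        else:
            margin -= 1
    for b in B:
        s1, s2 = M1.get(b) is not None, M2.get(b) is not None
        if s1 != s2:  # b abstains unless exactly one matching saturates it
            margin += 1 if s1 else -1
    return margin
```

A matching $M$ is popular when no rival matching achieves a positive margin against it, so this comparison is the basic primitive behind the characterisation results.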


Identifying Imperfect Clones in Elections

Faliszewski, Piotr, Janeczko, Lukasz, Lisowski, Grzegorz, Pekarkova, Kristyna, Schlotter, Ildiko

arXiv.org Artificial Intelligence

A perfect clone in an ordinal election (i.e., an election where the voters rank the candidates in a strict linear order) is a set of candidates that each voter ranks consecutively. We consider different relaxations of this notion: independent or subelection clones are sets of candidates that only some of the voters recognize as a perfect clone, whereas approximate clones are sets of candidates such that every voter ranks their members close to each other, but not necessarily consecutively. We establish the complexity of identifying such imperfect clones, and of partitioning the candidates into families of imperfect clones. We also study the parameterized complexity of these problems with respect to a set of natural parameters such as the number of voters, the size or the number of imperfect clones we are searching for, or their level of imperfection.
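The membership tests behind these notions are simple to state in code. A hedged sketch follows: the "spread" measure of imperfection below is one natural reading of approximate clones (members of $C$ ranked close together, though not necessarily consecutively); the paper's exact formalisation may differ, and the function name is illustrative.

```python
def is_clone(votes, C, d=0):
    """Check whether candidate set C is a d-approximate clone.

    votes is a list of strict rankings (best first).  C is a clone set
    if, in every vote, the members of C span at most len(C) - 1 + d
    positions; d = 0 recovers the perfect-clone condition (members of C
    are ranked consecutively by every voter).
    """
    for vote in votes:
        positions = [vote.index(c) for c in C]
        if max(positions) - min(positions) > len(C) - 1 + d:
            return False
    return True
```

Subelection clones would instead count how many votes satisfy the $d = 0$ condition and compare that count against a threshold.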



Drawing a Map of Elections

Szufa, Stanisław, Boehmer, Niclas, Bredereck, Robert, Faliszewski, Piotr, Niedermeier, Rolf, Skowron, Piotr, Slinko, Arkadii, Talmon, Nimrod

arXiv.org Artificial Intelligence

Our main contribution is the introduction of the map of elections framework. A map of elections consists of three main elements: (1) a dataset of elections (i.e., collections of ordinal votes over given sets of candidates), (2) a way of measuring similarities between these elections, and (3) a representation of the elections as points in the 2D Euclidean space, so that the more similar two elections are, the closer their points lie. In our maps, we mostly focus on datasets of synthetic elections, but we also show an example of a map over real-life ones. To measure similarities, we would have preferred to use, e.g., the isomorphic swap distance, but this is infeasible due to its high computational complexity. Hence, we propose the polynomial-time computable positionwise distance and use it instead. Regarding the representations in 2D Euclidean space, we mostly use the Kamada-Kawai algorithm, but we also show two alternatives. We develop the necessary theoretical results to form our maps and argue experimentally that they are accurate and credible. Further, we show how coloring the elections in a map according to various criteria helps in analyzing the results of a number of experiments. In particular, we show colorings according to the scores of winning candidates or committees, the running times of ILP-based winner determination algorithms, and the approximation ratios achieved by particular algorithms.
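As an illustration of the positionwise idea, the sketch below builds each election's position-frequency matrix and minimizes the total $\ell_1$ difference over bijections between the candidate sets. This brute-force version enumerates all bijections and is exponential in the number of candidates; the distance itself is computable in polynomial time (the minimization is an assignment problem), so this is for exposition only, and all names are illustrative.

```python
from itertools import permutations

def position_matrix(votes, candidates):
    """Row c gives the fraction of votes that rank candidate c
    at each position."""
    n, m = len(votes), len(candidates)
    M = {c: [0.0] * m for c in candidates}
    for vote in votes:
        for pos, c in enumerate(vote):
            M[c][pos] += 1 / n
    return M

def positionwise_distance(votes1, votes2, cands1, cands2):
    """Brute-force positionwise distance between two elections with
    equally many candidates: minimize, over bijections between the
    candidate sets, the total l1 distance between matched rows."""
    M1 = position_matrix(votes1, cands1)
    M2 = position_matrix(votes2, cands2)
    best = float("inf")
    for perm in permutations(cands2):
        cost = sum(abs(x - y)
                   for c1, c2 in zip(cands1, perm)
                   for x, y in zip(M1[c1], M2[c2]))
        best = min(best, cost)
    return best
```

For instance, two single-voter-profile-symmetric elections (each candidate in each position half the time) are at distance 0 regardless of candidate names, while an election whose voters all agree is strictly further away.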


Refining Alignment Framework for Diffusion Models with Intermediate-Step Preference Ranking

Ren, Jie, Zhang, Yuhang, Liu, Dongrui, Zhang, Xiaopeng, Tian, Qi

arXiv.org Artificial Intelligence

Direct preference optimization (DPO) has shown success in aligning diffusion models with human preference. Previous approaches typically assume a consistent preference label between final generations and noisy samples at intermediate steps, and directly apply DPO to these noisy samples for fine-tuning. However, we theoretically identify inherent issues in this assumption and its impacts on the effectiveness of preference alignment. We first demonstrate the inherent issues from two perspectives: gradient direction and preference order, and then propose a Tailored Preference Optimization (TailorPO) framework for aligning diffusion models with human preference, underpinned by some theoretical insights. Our approach directly ranks intermediate noisy samples based on their step-wise reward, and effectively resolves the gradient direction issues through a simple yet efficient design. Additionally, we incorporate the gradient guidance of diffusion models into preference alignment to further enhance the optimization effectiveness. Experimental results demonstrate that our method significantly improves the model's ability to generate aesthetically pleasing and human-preferred images.


Extending choice assessments to choice functions: An algorithm for computing the natural extension

Decadt, Arne, Erreygers, Alexander, De Bock, Jasper

arXiv.org Artificial Intelligence

This leads to a single optimal decision, or a set of optimal decisions all of which are equivalent. In the theory of imprecise probabilities, where multiple probabilistic models are considered simultaneously, this decision rule can be generalised in multiple ways; Troffaes [1] provides a nice overview. A typical feature of the resulting decision rules is that they will not always yield a single optimal decision, since a decision that is optimal in one probability model may, for example, be suboptimal in another. Here we take this generalisation a step further by adopting the theory of choice functions: a mathematical framework for decision-making that incorporates several (imprecise) decision rules as special cases, including the classical approach of maximising expected utility [2, 3, 4]. An important feature of this framework of choice functions is that it allows one to impose axioms directly on the decisions that are represented by such a choice function [3, 4, 5].